Metadata Exposes Authors of ICE's 'Mega' Detention Center Plans

WIRED

Comments and other data left on a PDF detailing Homeland Security's proposal to build "mega" detention and processing centers reveal the personnel involved in its creation. A PDF that Department of Homeland Security officials provided to New Hampshire governor Kelly Ayotte's office about a new effort to build "mega" detention and processing centers across the United States contains embedded comments and metadata identifying the people who worked on it. The seemingly accidental exposure of the identities of DHS personnel who crafted Immigration and Customs Enforcement's mega detention center plan lands amid widespread public pushback against the expansion of ICE detention centers and the department's brutal immigration enforcement tactics. Metadata in the document, which concerns ICE's "Detention Reengineering Initiative" (DRI), lists as its author Jonathan Florentino, the director of ICE's Newark, New Jersey, Field Office of Enforcement and Removal Operations. In a note embedded on top of an FAQ question, "What is the average length of stay for the aliens?"


Domain-Generalization to Improve Learning in Meta-Learning Algorithms

Anjum, Usman, Stockman, Chris, Luong, Cat, Zhan, Justin

arXiv.org Artificial Intelligence

This paper introduces Domain Generalization Sharpness-Aware Minimization Model-Agnostic Meta-Learning (DGS-MAML), a novel meta-learning algorithm designed to generalize across tasks with limited training data. DGS-MAML combines gradient matching with sharpness-aware minimization in a bi-level optimization framework to enhance model adaptability and robustness. We support our method with a PAC-Bayes theoretical analysis and convergence guarantees. Experimental results on benchmark datasets show that DGS-MAML outperforms existing approaches in accuracy and generalization. The proposed method is particularly useful for scenarios requiring few-shot learning and quick adaptation, and the source code is publicly available on GitHub.
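The bi-level structure the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the linear model, the single inner gradient step, and the hyperparameter names (`inner_lr`, `outer_lr`, `rho`) are all assumptions, and a standard SAM-style weight perturbation stands in for the paper's exact sharpness-aware objective.

```python
import numpy as np

def loss_grad(w, X, y):
    """Squared-error loss and its gradient for a linear model (toy stand-in)."""
    r = X @ w - y
    return np.mean(r ** 2), 2 * X.T @ r / len(y)

def dgs_maml_step(w, tasks, inner_lr=0.05, outer_lr=0.01, rho=0.05):
    """One meta-update over a batch of tasks.

    Each task is ((X_support, y_support), (X_query, y_query)).
    Inner loop: MAML-style adaptation on the support set.
    Outer loop: SAM-style gradient taken at weights perturbed
    toward higher query loss, then averaged across tasks.
    """
    outer_grad = np.zeros_like(w)
    for (Xs, ys), (Xq, yq) in tasks:
        # Inner loop: one gradient step on the task's support set (MAML).
        _, g = loss_grad(w, Xs, ys)
        w_task = w - inner_lr * g
        # SAM component: perturb the adapted weights in the direction of
        # increasing query loss, then evaluate the gradient there.
        _, gq = loss_grad(w_task, Xq, yq)
        eps = rho * gq / (np.linalg.norm(gq) + 1e-12)
        _, g_sam = loss_grad(w_task + eps, Xq, yq)
        outer_grad += g_sam
    return w - outer_lr * outer_grad / len(tasks)
```

Each outer step adapts the shared weights to every task's support set, perturbs the adapted weights toward higher query loss, and accumulates the query gradient at the perturbed point into the meta-update, which is the sharpness-aware twist on plain MAML.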


How Walmart is going all in on artificial intelligence

#artificialintelligence

As the digital era continues to turn retail on its head, Walmart's story is particularly interesting, as the brand is somehow both the challenger and the incumbent. Walmart has been the world's largest retailer since 1988, but as Sears proved, prominence isn't permanent. The retailer had suffered a string of disappointing quarters and seemed on track to become Amazon's biggest casualty. Galagher Jeff, the company's VP of Merchandising Operations and Business Analytics, even said so when he spoke at NRF 2019: Retail's Big Show in New York City earlier this week. "We had a business that was successful and we stopped taking risks," admits Jeff.


Apple's secretive fleet of self-driving cars has almost doubled

Daily Mail - Science & tech

Apple seems to be speeding ahead with its self-driving car program. The iPhone maker has nearly doubled its fleet of autonomous test vehicles in California over the last few months, according to the Financial Times. In January, Apple was operating 27 self-driving cars on the roads, but that number has since grown to 45 vehicles, data from California's Department of Motor Vehicles shows. The new report comes as some firms have suspended their autonomous driving tests following a fatal accident involving an Uber self-driving car this weekend. If Apple's fleet has increased this much, it has surged ahead of its rivals in terms of the size of its test fleet.


Online Tensor Methods for Learning Latent Variable Models

Huang, Furong, Niranjan, U. N., Hakeem, Mohammad Umar, Anandkumar, Animashree

arXiv.org Machine Learning

We introduce an online tensor decomposition based approach for two latent variable modeling problems: (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer hidden topics of text articles. We consider decomposition of moment tensors using stochastic gradient descent. We optimize the multilinear operations within SGD and avoid forming the tensors directly, saving computational and storage costs. We present optimized algorithms for two platforms. Our GPU-based implementation exploits the parallelism of SIMD architectures to allow for maximum speed-up through careful optimization of storage and data transfer, whereas our CPU-based implementation uses efficient sparse matrix computations and is suitable for large sparse datasets. For the community detection problem, we demonstrate accuracy and computational efficiency on the Facebook, Yelp, and DBLP datasets, and for the topic modeling problem we demonstrate good performance on the New York Times dataset. We compare our results to state-of-the-art algorithms such as the variational method, and report gains in accuracy and execution-time speedups of several orders of magnitude.
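The central trick the abstract mentions, running SGD on a moment tensor without ever materializing it, can be illustrated for symmetric rank-k CP decomposition of the third-order moment tensor E[x ⊗ x ⊗ x]. This is a simplified sketch of that general idea, not the paper's exact algorithm: the unweighted squared-error objective, the fixed step size, and the absence of whitening are all simplifying assumptions.

```python
import numpy as np

def stochastic_cp_step(A, x, lr=0.01):
    """One SGD step for symmetric rank-k CP decomposition of the
    third-moment tensor E[x (x) x (x) x], estimated implicitly from a
    single sample x, without ever forming the d x d x d tensor.

    A: (d, k) factor matrix, one column a_j per component.
    Objective (sketch): || T - sum_j a_j^{(x)3} ||_F^2, with the
    multilinear contraction T(I, a_j, a_j) replaced by its one-sample
    estimate (x^T a_j)^2 x.
    """
    proj = A.T @ x                     # <a_j, x> for every component j
    G = A.T @ A                        # Gram matrix of the factor columns
    # Gradient per column: sum_m <a_m, a_j>^2 a_m  -  (x^T a_j)^2 x,
    # written as two small matrix products over all columns at once.
    grad = A @ (G ** 2) - np.outer(x, proj ** 2)
    return A - lr * grad
```

The per-sample update needs only the inner products `A.T @ x` and the Gram matrix `A.T @ A`, so one step costs O(dk²) instead of the O(d³) required to form the tensor, which is the point of keeping the multilinear operations implicit inside SGD.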